Warning You have stumbled into a public, but unlaunched repository. Version numbers currently mean nothing. Backwards compatibility will break without changes in version numbers until we launch v0.1.0 (soon).
In short, Lace is a machine learning tool for people who want to learn about and understand their data.
Lace is a probabilistic cross-categorization engine written in Rust with an optional interface to Python. Unlike traditional machine learning methods, which learn some function mapping inputs to outputs, Lace learns a joint probability distribution over your dataset, which enables users to...
- predict or compute likelihoods of any number of features conditioned on any number of other features
- identify, quantify, and attribute uncertainty from variance in the data, epistemic uncertainty in the model, and missing features
- determine which variables are predictive of which others
- determine which records/rows are similar to which others on the whole or given a specific context
- simulate and manipulate synthetic data
- work natively with missing data and make inferences about missingness (missing not-at-random)
- work with continuous and categorical data natively, without transformation
- identify anomalies, errors, and inconsistencies within the data
- edit, backfill, and append data without retraining
and more, all in one place, without any explicit model building.
The Problem
The goal of Lace is to fill some of the massive chasm between standard machine learning (ML) methods, like deep learning and random forests, and statistical methods, like probabilistic programming languages. We wanted to develop a machine that allows users to experience the joy of discovery, and indeed optimizes for it.
Short version
Standard, optimization-based ML methods don't help you learn about your data. Probabilistic programming tools assume you have already learned a lot about your data. Neither approach is optimized for what we think is the most important part of data science, the science part: asking and answering questions.
Long version
Standard ML methods are easy to use. You can throw data into a random forest and start predicting with little thought. These methods attempt to learn a function f(x) -> y that maps inputs x to outputs y. This ease of use comes at a cost. Generally, f(x) does not reflect the reality of the process that generated your data, but was instead chosen by whoever developed the approach to be sufficiently expressive to better achieve the optimization goal. This renders most standard ML completely uninterpretable and unable to yield sensible uncertainty estimates.
On the other extreme you have probabilistic tools like probabilistic programming languages (PPLs). A user specifies a model to a PPL in terms of a hierarchy of probability distributions with parameters θ. The PPL then uses a procedure (normally Markov Chain Monte Carlo) to learn about the posterior distribution of the parameters given the data p(θ|x). PPLs are all about interpretability and uncertainty quantification, but they place a number of pretty steep requirements on the user. PPL users must specify the model themselves from scratch, meaning they must know (or at least guess) the model. PPL users must also know how to specify such a model in a way that is compatible with the underlying inference procedure.
Who should not use lace
There are a number of use cases for which Lace is not suited:
- Non-tabular data such as images and text
- Highly optimizing specific predictions
  - Lace would rather over-generalize than overfit
Quick start
Install the CLI and pylace (requires rust and cargo)
$ cargo install --locked lace
$ pip install pylace
First, use the CLI to fit a model to your data
$ lace run --csv satellites.csv -n 5000 -s 32 --seed 1337 satellites.lace
Then load the model and start asking questions
>>> from lace import Engine
>>> engine = Engine(metadata='satellites.lace')
# Predict the class of orbit given the satellite has a 75-minute
# orbital period and that it has a missing value of geosynchronous
# orbit longitude, and return epistemic uncertainty via Jensen-
# Shannon divergence.
>>> engine.predict(
... 'Class_of_Orbit',
... given={
... 'Period_minutes': 75.0,
... 'Longitude_of_radians_geo': None,
... },
... )
OUTPUT
# Find the top 10 most surprising (anomalous) orbital periods in
# the table
>>> engine.surprisal('Period_minutes') \
... .sort_by('surprisal', ascending=False) \
... .head(10)
OUTPUT
And similarly in rust:
use lace::prelude::*;

fn main() {
    let mut engine = Engine::load("satellites.lace").unwrap();

    // Predict the class of orbit given the satellite has a 75-minute
    // orbital period and that it has a missing value of geosynchronous
    // orbit longitude, and return epistemic uncertainty via Jensen-
    // Shannon divergence.
    engine
        .predict(
            "Class_of_Orbit",
            &Given::Conditions(vec![
                ("Period_minutes", Datum::Continuous(75.0)),
                ("Longitude_of_radians_geo", Datum::Missing),
            ]),
            Some(PredictUncertaintyType::JsDivergence),
            None,
        )
        .unwrap();
}
License
Lace is licensed under Server Side Public License (SSPL).
If you would like a license for use in closed source code please contact
lace@promised.ai
Installation
Installation requires Rust, which you can get here.
CLI
The lace CLI is installed with cargo via the command
$ cargo install --locked lace
Rust crate
To use the lace crate in a rust project add the following line under the
dependencies block in your Cargo.toml:
lace = "<version>"
Python
The Python library can be installed with pip
$ pip install pylace
The lace workflow
The typical workflow consists of two or three steps:

1. (Optional) Create a codebook describing how your data should be modeled.
2. Fit a model to your data.
3. Ask questions of the fitted model.

Step 1 is optional in many cases as Lace usually does a good job of inferring the types of your data. The condensed workflow looks like this.
Create an optional codebook using the CLI.
$ lace codebook --csv data.csv codebook.yaml
Run a model.
$ lace run --csv data.csv --codebook codebook.yaml -n 5000 metadata.lace
Open the model in lace
import lace
engine = lace.Engine(metadata='metadata.lace')
use lace::Engine;

let engine = Engine::load("metadata.lace")?;
Create and edit a codebook
The codebook contains information about your data such as the row and column names, the types of data in each column, how those data should be modeled, and all the prior distributions on various parameters.
The default codebook
In the lace CLI, you have the ability to initialize and run a model without specifying a codebook.
$ lace run --csv data.csv -n 5000 metadata.lace
Behind the scenes, lace creates a default codebook by inferring the types of your columns and creating a very broad (but not quite broad enough to satisfy the frequentists) hyper prior, which is a prior on the prior.
Creating a template codebook
Lace is happy to generate a default codebook for you when you initialize a model. You can create and save the default codebook to a file using the CLI. To create a codebook from a CSV file:
$ lace codebook --csv data.csv codebook.yaml
If you use a data format with a schema, such as Parquet or IPC (feather v2), you make Lace's work a bit easier.
$ lace codebook --ipc data.feather codebook.yaml
If you want to make changes to the codebook -- the most common of which are editing the Dirichlet process prior, specifying whether certain columns are missing not-at-random, adjusting the prior distributions, and disabling hyper priors -- you just open it up in your text editor and get to work.
For example, let's say we want to make a column of the satellites data set missing not-at-random. First, we create the template codebook,
$ lace codebook --csv satellites.csv codebook-sats.yaml
open it up in a text editor and find the column of interest
- name: longitude_radians_of_geo
  coltype: !Continuous
    hyper:
      pr_m:
        mu: 0.21544247097911842
        sigma: 1.570659039531299
      pr_k:
        shape: 1.0
        rate: 1.0
      pr_v:
        shape: 6.066108090103747
        scale: 6.066108090103747
      pr_s2:
        shape: 6.066108090103747
        scale: 2.4669698184613824
    prior: null
  notes: null
  missing_not_at_random: false
and change the column metadata to something like this:
- name: longitude_radians_of_geo
  coltype: !Continuous
    hyper:
      pr_m:
        mu: 0.21544247097911842
        sigma: 1.570659039531299
      pr_k:
        shape: 1.0
        rate: 1.0
      pr_v:
        shape: 6.066108090103747
        scale: 6.066108090103747
      pr_s2:
        shape: 6.066108090103747
        scale: 2.4669698184613824
    prior: null
  notes: "This value is only defined for GEO satellites"
  missing_not_at_random: true
For a complete list of codebook fields, see the reference.
Run/train/fit a model
Lace is a Bayesian tool, so we do posterior sampling via Markov chain Monte Carlo (MCMC). A typical machine learning model will use some sort of optimization method to find the one model that fits best; the objective for fitting is different in Lace.
In Lace we use a number of states (or samples), each running MCMC independently to characterize the posterior distribution of the model parameters given the data. Posterior sampling isn't meant to maximize the fit to a dataset, it is meant to help understand the conditions that created the data.
When you fit to your data in Lace, you have options to run a set number of
states for a set number of iterations (limited by a timeout). Each state is a
posterior sample. More states is better, but the run time of everything
increases linearly with the number of states; not just the fit, but also the
oracle operations like logp and simulate. As a rule of thumb, 32 is a
good default number of states. But if you find your states tend to strongly
disagree on things, it is probably a good idea to add more states to fill in
the gaps.
As for the number of iterations, you will want to monitor your convergence plots. There is no benefit to early stopping like there is with neural networks; MCMC will usually only do better the longer you run it.

The above figure shows the MCMC algorithm partitioning a dataset into views and categories.
A (potentially useless) analogy comparing MCMC to optimization
At the risk of creating more confusion than we resolve, let us make an analogy to mountaineering. You have two mountaineers: a gradient ascent (GA) mountaineer and an MCMC mountaineer. You place each mountaineer at a random point in the Himalayas and say "go". GA's goal is to find the peak of Everest. Its algorithm for doing so is simply always to go up and never to go down. GA is guaranteed to find a peak, but unless it is very lucky in its starting position, it is unlikely ever to summit Everest.
MCMC has a different goal: to map the mountain range (posterior distribution). It does this by always going up, but sometimes going down if it doesn't end up too low. The longer MCMC explores, the better understanding it has about the Himalayas, an understanding which likely includes the position of the peak of Everest.
While GA achieves its goal quickly, it does so at the cost of understanding the terrain, which in our analogy represents the knowledge within our data.
In Lace we place a troop of mountaineers in the mountain range of our posterior distribution. We call the individual mountaineers states, samples, or chains. Our hope is that our mountaineers can sufficiently map the information in our data. Of course, the ability of the mountaineers to build this map depends on the size of the space (which is related to the size of the data) and the complexity of the space (the intricacy of the underlying process).
In general, the posterior of a Dirichlet process mixture is indeed much like the Himalayas: there are many, many peaks (modes), which makes the mountaineers' job difficult. Certain MCMC kernels do better in certain circumstances, and employing a variety of kernels leads to better results.
Our MCMC Kernels
The vast majority of the fitting runtime is spent updating the row-category assignment and the column-view assignment. Other updates, such as feature component parameters, CRP parameters, and prior parameters, take an insignificant amount of time. Here we discuss the MCMC kernels responsible for the vast majority of mixing in Lace: the row and column reassignment kernels.
Row kernels
- `slice`: Proposes reassignment for each row to an instantiated category or
  one of many new, empty categories. Slice is good for small tweaks in the
  assignment, and it is very fast. When there are a lot of rows, `slice` can
  have difficulty creating new categories.
- `gibbs`: Proposes reassignment of each row sequentially. Generally makes
  larger moves than `slice`. Because it is sequential and accesses data in
  random order, `gibbs` is very slow.
- `sams`: Proposes merges and splits of categories. Only considers the rows in
  one or two categories. Proposes large moves, but cannot make the fine tweaks
  that `slice` and `gibbs` can. Since it proposes big moves, its proposals are
  often rejected as the run progresses and the state already fits fairly well.
Column kernels
The column kernels are generally adapted from the row kernels with some caveats.
- `slice`: Same as the row kernel, but over columns.
- `gibbs`: The same structurally as the row kernel, but uses random seed
  control to implement parallelism. Gibbs is a good choice if the number of
  columns is high and mixing is a concern.
Convergence
When training a neural network, we monitor for convergence in the error or loss; when we see diminishing returns in our loss function with each epoch, it is time to stop. Convergence in MCMC is a bit different. We say our Markov chain has converged when it has settled into a situation in which it is producing draws from the posterior distribution. At the beginning of a run, the chain is rapidly moving away from the low-probability area in which it was initialized and into the higher-probability areas more representative of the posterior.
To monitor convergence, we observe the score (which is proportional to the likelihood) over time. If the score stops increasing and begins to oscillate, one of two things has happened: we have settled into the posterior distribution, or the Markov chain has gotten stuck on an island of high likelihood. When a model is identifiable (meaning that each unique parameter set creates a unique model), the posterior distribution is unimodal: there is only one peak, which is easily mapped.
Above. Score by MCMC kernel step in the Animals dataset. Gray lines represent the scores of parallel Markov chains; the black line is their mean.
Above. Score by MCMC kernel step in the Satellites dataset. Gray lines represent the scores of parallel Markov chains; the black line is their mean. Note that some of the Markov chains experience sporadic jumps upward. This is the MCMC kernel hopping to a higher-probability mode.
A Bayesian modeler must make compromises between expressiveness, interpretability, and identifiability. A modeler may transform variables to create a more well-behaved posterior at the cost of the model's interpretability. The modeler may also achieve identifiability by reducing the complexity of the model at the cost of failing to capture certain phenomena.
To be general, a model must be expressive; and to be safe, a model must be interpretable. We have chosen to favor general applicability and interpretability over identifiability. We fight against multimodality in three ways: by deploying MCMC algorithms that are better at hopping between modes, by running many Markov chains in parallel, and by being interpretable.
There are many metrics for convergence, but none of them are practically useful in models of this complexity. Instead, we encourage users to monitor convergence via the score and by smell-testing the model. If your model is failing to pick up obvious dependencies, or is missing obvious intuitions, you should run it longer.
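For intuition, a crude score-plateau check might look like the following sketch. This is illustrative only; the `has_plateaued` helper and its window logic are our own invention, not part of Lace.

```python
# Hypothetical helper (not a Lace API): compare the mean score over the
# last two windows of MCMC steps. If the later window is no longer
# meaningfully higher, the chain has likely settled (or gotten stuck).
def has_plateaued(scores, window=50, tol=1e-3):
    if len(scores) < 2 * window:
        return False
    earlier = sum(scores[-2 * window:-window]) / window
    later = sum(scores[-window:]) / window
    return later - earlier < tol

rising = [0.1 * i for i in range(200)]  # score still climbing
settled = rising + [20.0] * 200         # score has leveled off

print(has_plateaued(rising))   # False
print(has_plateaued(settled))  # True
```

Remember that a plateau can also mean the chain is stuck on a mode, which is why the smell test matters.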
Conduct an analysis
You've made a codebook, you've fit a model, and now you're ready to learn.
Let's use the built-in examples to walk through some key concepts. The
Animals example isn't the biggest or the most complex, and that's exactly why
it's so great. People have a ton of intuition about animals: how and why you
might categorize animals into a taxonomy, why animals have certain features,
and what those features might tell us about other features. This means that
we can see whether Lace recovers our intuition.
from lace import examples
# if this is your first time using the example, lace must
# build the metadata
animals = examples.Animals()
use lace::examples::Example;
// You can create an Engine or an Oracle. An Oracle is
// basically an immutable Engine. You cannot add/edit data or
// extend runs (update).
let animals = Example::Animals.engine().unwrap();
Usually, the first question we want to ask of a new dataset is "What questions can I answer?" This is a question about statistical dependence. Which features of our dataset share statistical dependence with which others? This is closely linked with the question "which things can I predict given which other things?"
In python, we can generate a plotly heatmap of dependence probability.
animals.clustermap(
'depprob',
color_continuous_scale='greys',
zmin=0,
zmax=1
).figure.show()
In Rust, we ask about dependence probabilities between individual pairs of features
let depprob_flippers = animals.depprob(
    "swims",
    "flippers",
).unwrap();
Probabilistic Cross Categorization
Lace is built on a Bayesian probabilistic model called Probabilistic Cross-Categorization (PCC). PCC groups \(m\) columns into \(1, ..., m\) views, and within each view, groups the \(n\) rows into \(1, ..., n\) categories. PCC uses a non-parametric prior process (the Dirichlet process) to learn the number of views and categories. Each column (feature) is then modeled as a mixture distribution defined by the category partition. For example, a continuous-valued column will be modeled as a mixture of Gaussian distributions. For references on PCC, see the appendix.
Differences between PCC and Traditional ML
Input Data
Most ML models are designed to handle one type of data, generally continuous.
This means if you have categorical data, you have to transform it: you can call
float(x) and just sweep the categorical-ness of the data under the rug, you
can do something like one-hot
encoding, which significantly increases
dimensionality, or you can use some kind of embedding like in
natural language processing,
which destroys interpretability. PCC allows your data to stay as they are.
The learning target
Most of the machine learning models we all know and love are formalized in terms of learning some unknown function \(f(x) \rightarrow y\), where \(x\) are inputs and \(y\) are outputs, by optimizing with respect to some objective (e.g., error or cross-entropy). This formalization locks machine learning models into rigid use cases, e.g. predict "this" given "that". PCC instead attempts to learn a joint probability distribution, \(f(x_1, x_2, ..., x_n)\), where the \(x\)'s are features. From this joint distribution, the user can construct any conditional distribution (e.g., \(p(x_1, x_2 | x_3)\)).
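To make the joint-to-conditional point concrete, here is a minimal sketch with made-up numbers (not the Lace API) showing how any conditional falls out of a joint distribution by slicing and renormalizing:

```python
# A made-up joint p(x1, x2, x3) over three binary features.
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05,
    (0, 1, 0): 0.10, (0, 1, 1): 0.15,
    (1, 0, 0): 0.05, (1, 0, 1): 0.20,
    (1, 1, 0): 0.05, (1, 1, 1): 0.20,
}

def conditional(joint, x3):
    """p(x1, x2 | x3): slice the joint at x3, then renormalize."""
    sliced = {(x1, x2): p for (x1, x2, z), p in joint.items() if z == x3}
    total = sum(sliced.values())
    return {k: p / total for k, p in sliced.items()}

cond = conditional(joint, x3=1)
print(cond[(1, 0)])  # 0.20 / 0.60 = 0.333...
```

Once you hold the full joint, no conditional is privileged; "inputs" and "outputs" are chosen at query time.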
The learning method
Most machine learning models use an optimization algorithm to find a set of parameters that achieves a local minimum in the loss function. For example, deep neural networks may use stochastic gradient descent to minimize cross entropy. This results in one parameter set representing one model.
In Lace, we use Markov Chain Monte Carlo to do posterior sampling. That is, we attempt to draw a number of PCC states from the posterior distribution. These states provide a kind of topographical map of the PCC posterior distribution which we can use to do a number of things including computing likelihoods and uncertainties.
Dependence probability
The dependence probability (often referred to in code as depprob) between two columns, x and y, is the probability that there is a path of statistical dependence between x and y. The technology underlying the Lace platform clusters columns into views. Each state has an individual clustering of columns. The dependence probability is the proportion of states in which x and y are in the same view,
\[ D(X; Y) = \frac{1}{|S|} \sum_{s \in S} [z^{(s)}_x = z^{(s)}_y] \]
where S is the set of states and z is the assignment of columns to views.
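The formula above can be sketched in a few lines (the view assignments here are hypothetical, not Lace's internals):

```python
# Each state assigns columns to views; depprob is the fraction of
# states in which two columns land in the same view.
states = [
    {"a": 0, "b": 0, "c": 1},
    {"a": 0, "b": 1, "c": 1},
    {"a": 2, "b": 2, "c": 2},
    {"a": 0, "b": 0, "c": 1},
]

def depprob(states, x, y):
    """D(X; Y) = (1/|S|) * sum over states of [z_x == z_y]."""
    return sum(z[x] == z[y] for z in states) / len(states)

print(depprob(states, "a", "b"))  # 0.75: together in 3 of 4 states
print(depprob(states, "a", "c"))  # 0.25: together in 1 of 4 states
```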
Above. A dependence probability clustermap. Each cell represents the probability of dependence between two columns. Zero is white and one is black. The dendrogram, generated by seaborn, clusters mutually dependent columns.
It is important to note that dependence probability is meant to tell you whether a dependence exists; it does not necessarily provide information about the strength of dependencies. Dependence probability could potentially be high between independent columns if they are linked by dependent columns. For example, in the three-variable model
graph TD;
X-->Z;
Y-->Z;
all three columns will be in the same view since Z is dependent on both X and Y, so there will be a high dependence probability between X and Y even though they are marginally independent (X and Y are only dependent given Z).
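We can verify this behavior numerically with a toy collider (a standalone illustration, not Lace code): let X and Y be independent fair bits and Z = X XOR Y.

```python
from itertools import product
from math import log2

# X, Y independent fair bits; Z = X XOR Y (the collider in the graph).
joint = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}

def marginal(joint, idxs):
    """Sum out all indices not in idxs."""
    out = {}
    for key, p in joint.items():
        k = tuple(key[i] for i in idxs)
        out[k] = out.get(k, 0.0) + p
    return out

def mutual_info(joint):
    """I(X;Y) in bits, from the (x, y), x, and y marginals."""
    pxy = marginal(joint, (0, 1))
    px, py = marginal(joint, (0,)), marginal(joint, (1,))
    return sum(p * log2(p / (px[(x,)] * py[(y,)]))
               for (x, y), p in pxy.items() if p > 0)

def cond_mutual_info(joint):
    """I(X;Y|Z) in bits."""
    pz = marginal(joint, (2,))
    pxz, pyz = marginal(joint, (0, 2)), marginal(joint, (1, 2))
    return sum(p * log2(p * pz[(z,)] / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), p in joint.items() if p > 0)

print(mutual_info(joint))       # 0.0: X and Y are marginally independent
print(cond_mutual_info(joint))  # 1.0: fully dependent once Z is known
```

So mutual information between X and Y is zero even though the model correctly places all three columns together.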
Dependence probability is the go-to for structure modeling because it is fast to compute and well-behaved for all data. If you need more information about the strength of dependencies, use mutual information.
Mutual information
Mutual information (often referred to in code as mi) is a measure of the information shared between two variables. It is mathematically defined as
\[ I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x,y) \log \frac{p(x, y)}{p(x)p(y)}, \]
or in terms of entropies,
\[ I(X;Y) = H(X) - H(X|Y). \]
Mutual information is well behaved for discrete data types (count and categorical), for which the sum applies; but for continuous data types for which the sum becomes an integral, mutual information can break down because differential entropy is no longer guaranteed to be positive.
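To see why, consider the differential entropy of a Gaussian, h = 0.5 * ln(2*pi*e*sigma^2). A standalone numerical check (not Lace code) shows it goes negative once the density is tight enough:

```python
from math import log, pi, e

def gaussian_entropy(sigma):
    """Differential entropy (in nats) of a Gaussian with std dev sigma."""
    return 0.5 * log(2 * pi * e * sigma ** 2)

print(gaussian_entropy(1.0))   # ~1.42: positive for a broad density
print(gaussian_entropy(0.05))  # ~-1.58: negative for a tight density
```

A negative entropy wrecks the usual intuition that mutual information is bounded by the component entropies.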
For example, the following plots show the dependence probability and mutual information heatmaps for the zoo dataset, which is composed entirely of binary variables:
from lace import examples
animals = examples.Animals()
animals.clustermap('depprob', color_continuous_scale='greys', zmin=0, zmax=1)
Above. A dependence probability cluster map for the Animals dataset.
animals.clustermap('mi', color_continuous_scale='greys')
Above. A mutual information clustermap. Each cell represents the Mutual Information between two columns. Note that compared to dependence probability, the matrix is quite sparse. Also note that the diagonal entries are the entropies for each column.
And below are the dependence probability and mutual information heatmaps of the satellites dataset, which is composed of a mixture of categorical and continuous variables:
satellites = examples.Satellites()
satellites.clustermap('depprob', color_continuous_scale='greys', zmin=0, zmax=1)
Above. The dependence probability cluster map for the satellites data set.
satellites.clustermap('mi', color_continuous_scale='greys')
Above. The normalized mutual information cluster map for the satellites data set. Note that the values are no longer bounded between 0 and 1 due to inconsistencies caused by differential entropies.
satellites.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'linfoot'}
)
Above. The Linfoot-transformed mutual information cluster map for the satellites data set. The Linfoot information transformation often helps to mediate the weirdness that can arise from differential entropy.
Normalization Methods
Mutual information can be difficult to interpret because it does not have well-behaved bounds. It is bounded below by zero in all but the continuous case (in which it can be negative), but it has no natural upper bound. To create an upper bound, we have a number of options:
Normalized
Knowing that the mutual information cannot exceed the minimum of the total information in (the entropy of) either X or Y, we can normalize by the minimum of the two component entropies:
\[ \hat{I}(X;Y) = \frac{I(X; Y)}{\min \left[H(X), H(Y) \right]} \]
animals.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'normed'}
)
Above. Normalized mutual information cluster map for the animals dataset.
IQR
In the Information Quality Ratio (IQR), we normalize by the joint entropy.
\[ \hat{I}(X;Y) = \frac{I(X; Y)}{H(X, Y)} \]
animals.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'iqr'}
)
Above. IQR Normalized mutual information cluster map for the animals dataset.
Jaccard
To compute the Jaccard distance, we subtract the IQR from 1. Thus, columns with more shared information have a smaller distance.
\[ \hat{I}(X;Y) = 1 - \frac{I(X; Y)}{H(X, Y)} \]
animals.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'jaccard'}
)
Above. Jaccard distance cluster map for the animals dataset.
Pearson
To compute something akin to the Pearson Correlation coefficient, we normalize by the square root of the product of the component entropies:
\[ \hat{I}(X;Y) = \frac{I(X; Y)}{\sqrt{H(X) H(Y)}} \]
animals.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'pearson'}
)
Above. Pearson normalized mutual information matrix cluster map for the animals dataset.
Linfoot
Linfoot information is obtained by solving for the correlation between the X and Y components of a bivariate Gaussian distribution with the given mutual information.
\[ \hat{I}(X;Y) = \sqrt{ 1 - \exp(-2\,I(X;Y)) } \]
animals.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'linfoot'}
)
Linfoot is often the most well-behaved normalization method especially when using continuous variables.
Above. Linfoot information matrix cluster map for the animals dataset.
Variation of Information
The Variation of Information (VOI) is a metric typically used to determine the distance between two clusterings of variables, but we can use it generally to transform mutual information into a valid metric.
\[ \text{VI}(X;Y) = H(X) + H(Y) - 2\,I(X;Y) \]
animals.clustermap(
'mi',
color_continuous_scale='greys',
fn_kwargs={'mi_type': 'voi'}
)
Above. Variation of information matrix.
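The normalizations in this section reduce to simple arithmetic on H(X), H(Y), and I(X;Y). Here is a standalone sketch (with made-up entropy values; Lace computes these quantities internally), using the identity H(X,Y) = H(X) + H(Y) - I(X;Y):

```python
from math import sqrt, exp

def normed(i, hx, hy):
    return i / min(hx, hy)

def iqr(i, hx, hy):
    return i / (hx + hy - i)  # joint entropy H(X,Y) = H(X) + H(Y) - I

def jaccard(i, hx, hy):
    return 1.0 - iqr(i, hx, hy)

def pearson(i, hx, hy):
    return i / sqrt(hx * hy)

def linfoot(i):
    return sqrt(1.0 - exp(-2.0 * i))  # i in nats

def voi(i, hx, hy):
    return hx + hy - 2.0 * i

# Two binary columns with H(X) = H(Y) = 1 and I(X;Y) = 0.5:
i, hx, hy = 0.5, 1.0, 1.0
print(normed(i, hx, hy))   # 0.5
print(iqr(i, hx, hy))      # 0.333...
print(voi(i, hx, hy))      # 1.0
```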
Row similarity
Row similarity (referred to in code as rowsim) is a measurement of the similarity between rows; not a measurement of the distance between the values of the rows, but of how similarly the values of two rows are modeled. Row similarity is a measurement in the model space, not the data space. As such, we do not need to come up with an appropriate distance metric that incorporates data of different types, and we do not need to fret about missing data.
Rows whose values are modeled more similarly will have higher row similarity.
The technology underlying the Lace platform clusters columns into views, and within each view, clusters rows into categories. The row similarity is the average over states of the proportion of views in a state in which the two rows are in the same category.
\[ RS(A, B) = \frac{1}{|S|} \sum_{s \in S} \frac{1}{|V_s|}\sum_{v \in V_s} [v_a = v_b] \]
where S is the set of states, Vs is the set of views in state s (each view carrying its own assignment of rows to categories), and va is the category assignment of row a within a given view.
Column-weighted variant
One may wish to weight by the size of the view. For example, if 99% of the columns are in one view, and the two rows are together in the large view, but not the small view, we would like a row similarity of 99%, not 50%. For this reason, there is a column-weighted variant, which can be accessed by way of an extra argument to the rowsim function.
\[ \bar{RS}(A, B) = \frac{1}{|S|} \sum_{s \in S} \sum_{v \in V_s} \frac{|C_v|}{|C|} [v_a = v_b] \]
where C is the set of all columns in the table and Cv is the set of columns in a given view, v.
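Both formulas can be sketched as follows (the state structure here is hypothetical, not Lace's internals):

```python
# One hypothetical state with two views. Each view owns some columns
# and assigns rows to categories.
state = [
    {"n_cols": 9, "cats": {"A": 0, "B": 0}},  # A and B share a category
    {"n_cols": 1, "cats": {"A": 0, "B": 1}},  # A and B do not
]
states = [state]

def rowsim(states, a, b):
    """Average over states of the fraction of views where a, b share a category."""
    return sum(
        sum(v["cats"][a] == v["cats"][b] for v in s) / len(s) for s in states
    ) / len(states)

def rowsim_col_weighted(states, a, b):
    """Weight each agreeing view by the fraction of columns it owns."""
    total = 0.0
    for s in states:
        n_cols = sum(v["n_cols"] for v in s)
        total += sum(
            v["n_cols"] / n_cols for v in s if v["cats"][a] == v["cats"][b]
        )
    return total / len(states)

print(rowsim(states, "A", "B"))               # 0.5: together in 1 of 2 views
print(rowsim_col_weighted(states, "A", "B"))  # 0.9: the big view dominates
```

This is exactly the 99%-vs-50% situation described above, with a 9-column view and a 1-column view.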
We can see the effect of column weighting when computing the row similarity of animals in the zoo dataset.
from lace import examples
animals = examples.Animals()
animals.clustermap('rowsim', color_continuous_scale='greys', zmin=0, zmax=1)
Above. Standard row similarity for the animals data set.
animals.clustermap(
'rowsim',
color_continuous_scale='greys',
zmin=0,
zmax=1,
fn_kwargs={'col_weighted': True}
)
Above. Column-weighted row similarity for the animals data set. Note that the clusters are more pronounced.
Contextualization
Often, we are not interested in aggregate similarity over all variables, but in similarity with respect to specific target variables. For example, if we are a seed company looking to determine where certain seeds will be most effective, we might not want to compute row similarity of locations across all variables, but might be more interested in row similarity with respect to yield.
Contextualized row similarity (usually via the wrt [with respect to] argument) is computed only over the views containing the columns of interest. When contextualizing with a single column, column-weighted and standard row similarity are equivalent.
animals.clustermap(
'rowsim',
color_continuous_scale='greys',
zmin=0,
zmax=1,
fn_kwargs={'wrt': ['swims']}
)
Above. Row similarity for the animals data set with respect to the swims variable. Animals that swim are colored blue. Animals that do not are colored tan. Note that if row similarity were looking at just the values of the data, similarity would be either one (similar) or zero (dissimilar) because the animals data are binary and we are looking at only one column. But row similarity here captures nuanced information about how swims is modeled. We see that within the animals that swim, there are two distinct clusters of similarity. There are animals like the dolphin and killer whale that live their lives in the water, and there are animals like the polar bear and hippo that just visit. Both of these groups of animals swim, but for each group, Lace predicts that they swim for different reasons.
Prediction & Imputation
Prediction and imputation both involve inferring an unknown quantity. Imputation refers to inferring the value of a specific cell in our table, and prediction refers to inferring a hypothetical value.
The arguments for impute are the coordinates of the cell. We may wish to impute the cell at row bat and column furry. The arguments for prediction are the conditions we would like to use to create the conditional distribution. We may wish to predict furry given flys=True, brown=True, and fierce=False.
Uncertainty
Uncertainty comes from several sources (to learn more about those sources, check out this blog post):
- Natural noise/imprecision/variance in the data-generating process
- Missing data and features
- Difficulty on the part of the model to capture a prediction
Type 1 uncertainty can be captured by computing the predictive distribution variance (or entropy for categorical targets). You can also visualize the predictive distribution. Observing multi-modality (multiple peaks in the distribution) can be a good indication that you are missing valuable information.
Determining how certain the model is in its ability to capture a prediction is done by assessing the consensus among the predictive distribution emitted by each state. The more alike these distributions are, the more certain the model is in its ability to capture a prediction.
Mathematically, uncertainty is formalized as the Jensen-Shannon divergence (JSD) between the state-level predictive distributions. Uncertainty goes from 0 to 1, 0 meaning that there is only one way to model a prediction, and 1 meaning that there are many ways to model a prediction and they all completely disagree.
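As an illustration of the idea (a simplified sketch for categorical predictions, not Lace's exact computation): the JSD between state-level distributions is the entropy of their mixture minus their mean entropy, and dividing by its log(k) upper bound scales it to [0, 1].

```python
from math import log

def entropy(p):
    return -sum(pi * log(pi) for pi in p if pi > 0)

def js_uncertainty(dists):
    """Normalized JSD across k state-level categorical distributions."""
    k = len(dists)
    mix = [sum(d[i] for d in dists) / k for i in range(len(dists[0]))]
    jsd = entropy(mix) - sum(entropy(d) for d in dists) / k
    return jsd / log(k)  # JSD <= log(k), so this lands in [0, 1]

agree = [[0.9, 0.1], [0.9, 0.1], [0.9, 0.1]]
disagree = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]

print(js_uncertainty(agree))     # ~0: the states agree
print(js_uncertainty(disagree))  # ~0.42: the states disagree badly
```

When the states emit identical predictive distributions, the mixture is one of them and the divergence vanishes; maximal disagreement pushes the value toward 1.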

Above. Prediction uncertainty when predicting Period_minutes of a satellite in the satellites data set. Note that the uncertainty value here is driven mostly by the differing variances of the state-level predictive distributions.
Certain ignorance occurs when the model has zero data by which to make a prediction and instead falls back to the prior distribution. This is rare, but when it happens it will be apparent. To be as general as possible, the priors for a column's component distributions are generally much broader than the predictive distribution, so if you see a predictive distribution that is senselessly wide and does not look like the marginal distribution of that variable (which should follow the histogram of the data), you have certain ignorance. The fix is to fill in the data for items similar to the one you are predicting.
Surprisal
Surprisal is a method by which users may find surprising (go figure) data such as outliers, anomalies, and errors.
In information theory
In information theoretic terms, "surprisal" (also referred to as self-information, information content, and potentially other things) is simply the negative log likelihood.
\[ s(x) = -\log p(x) \]
\[ s(x|y) = -\log p(x|y) \]
In Lace
In the Lace Engine, you have the option to call engine.surprisal or to negate the result of engine.logp. There are differences between these two calls:
engine.surprisal takes a column as the first argument and can take optional row indices and values. It computes the information theoretic surprisal of a value in a particular position in the Lace table, so it considers only existing values, or hypothetical values at specific positions in the table.
-engine.logp considers hypothetical values only. We provide a set of inputs and conditions and ask 'how surprised would we be if we saw this?'
Interpreting surprisal values
Surprisal is not normalized insofar as the likelihood is not normalized. For discrete distributions, surprisal will always be positive, but for tight continuous distributions that can have likelihoods greater than 1, surprisal can be negative. Interpreting the raw surprisal values is simply a matter of looking at which values are higher or lower and by how much.
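The sign behavior described above can be checked with a quick sketch (plain Python, a single Gaussian, not Lace itself):

```python
import math

def gaussian_surprisal(x, mu, sigma):
    # surprisal is the negative log likelihood: s(x) = -log p(x)
    logp = -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)
    return -logp

# broad distribution: density < 1 everywhere, so surprisal is positive
assert gaussian_surprisal(0.0, 0.0, 1.0) > 0
# tight distribution: density at the mode exceeds 1, so surprisal is negative
assert gaussian_surprisal(0.0, 0.0, 0.01) < 0
```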
Transformations may not be very valuable. The surprisal distribution is usually very far from capital 'N' Normal (Gaussian).
import plotly.express as px
from lace.examples import Satellites
engine = Satellites()
surp = engine.surprisal('Period_minutes')
# plotly support for polars isn't currently great
fig = px.histogram(surp.to_pandas(), x='surprisal')
fig.show()
Lots of skew in this distribution. The satellites example is especially nasty because there are a lot of extremes when we're talking about spacecraft.
Simulating data
If you've used logp, you already understand how to simulate data. In both
logp and simulate you define a distribution. In logp the output is an
evaluation of a specific point (or points) in the distribution; in simulate
you generate from the distribution.
We can simulate from joint distributions
from lace.examples import Animals
animals = Animals()
swims = animals.simulate(['swims'], n=10)
Or we can simulate from conditional distributions
swims = animals.simulate(['swims'], given={'flippers': 1}, n=10)
If we want to create a debiased dataset, we can do something like this: there are too many land animals in the animals dataset, and we'd like an even representation of land and aquatic animals. All we need to do is simulate from the conditionals and concatenate the results.
import polars as pl
n = animals.n_rows
target_col = 'swims'
other_cols = [col for col in animals.columns if col != target_col]
land_animals = animals.simulate(
    other_cols,
    given={target_col: 0},
    n=n//2,
    include_given=True,
)
aquatic_animals = animals.simulate(
    other_cols,
    given={target_col: 1},
    n=n//2,
    include_given=True,
)
df = pl.concat([land_animals, aquatic_animals])
That's it! We introduced a new keyword argument, include_given, which
includes the given conditions in the output so we don't have to add them back
manually.
The draw method
Evaluating simulated data
In- and out-of-table operations
In Lace there are a number of operations that seem redundant. Why are there
simulate and draw; predict and impute? Why is there surprisal when
one can simply compute -logp? The answer is that they split into in-table operations
and out-of-table (or hypothetical) operations. In-table operations use the
probability distribution at a certain cell in the PCC table, while out-of-table
operations do not take table location, and thus category and view assignments,
into account. Hypothetical operations must marginalize over assignments.
Here is a table listing in-table and hypothetical operations.
| Purpose | In-table | Hypothetical |
|---|---|---|
| Draw random data | draw | simulate |
| Compute likelihood | (-) surprisal | logp |
| Find argmax of a likelihood | impute | predict |
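The distinction can be sketched with a toy mixture (plain Python; the weights and components are hypothetical and this is not Lace's internals). An in-table operation knows which category a cell belongs to and uses that component directly; a hypothetical operation must marginalize over the category assignment:

```python
import math

# a toy view with two categories and their (already inferred) weights
weights = [0.6, 0.4]
components = [(-3.0, 1.0), (3.0, 1.0)]  # (mean, std) per category

def logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def logp_hypothetical(x):
    # out-of-table: marginalize over category assignments
    return math.log(sum(w * math.exp(logpdf(x, m, s))
                        for w, (m, s) in zip(weights, components)))

def neg_surprisal_in_table(x, category):
    # in-table: the cell's category is known, so use that component directly
    return logpdf(x, *components[category])

# a cell known to sit in the right-hand category gets a higher likelihood
# than the same value treated as hypothetical
x = 2.5
assert neg_surprisal_in_table(x, 1) > logp_hypothetical(x)
```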
Preparing your data for Lace
Compared with many other machine learning tools, lace has very few requirements
for data: data columns may be integer, continuous, or categorical string types;
empty cells do not need to be filled in; and the table must contain a row
index column labeled ID.
Note that for categorical columns, lace currently supports up to 256 unique values.
Supported data types for inference
Lace supports several data types, and more can be supported (with some work).
Continuous data
Continuous columns are modeled as mixtures of Gaussian distributions. Find an explanation of the parameters in the codebook.
Categorical data
Categorical columns are modeled as mixtures of categorical distributions. Find an explanation of the parameters in the codebook.
Count data
Support exists for a count data type, which is modeled as a mixture of Poisson distributions, but there are some drawbacks, which make it best to convert the data to continuous.
- The Poisson distribution is a single parameter model so the location and variance of the mixture components cannot be controlled individually. In the Poisson model, higher magnitude means higher variance.
- The hyper prior for count data is finicky and can often cause underflow/overflow errors when the underlying data do not look like Poisson distributions.
Note: If you use Count data, do so because you know that the underlying mixture components will be Poisson-like, and be sure to set the prior and unset the hyperprior in the codebook.
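A hypothetical codebook entry following that advice might look like this (it mirrors the column-metadata examples later in this document; the Count prior field names shown here are an assumption, so check a CLI-generated codebook for the exact schema):

```yaml
- name: num_payloads        # hypothetical column name
  coltype:
    Count:
      # null out the hyperprior so hyperparameter inference is disabled
      hyper: ~
      # a prior on the Poisson rate; field names are illustrative
      prior:
        shape: 2.0
        rate: 1.0
  notes: ~
  missing_not_at_random: false
```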
Preparing your data for Lace
Lace is pretty forgiving when it comes to data. You can have missing values, string values, and numerical values all in the same table; but there are some rules that your data must follow for the platform to pick up on things. Here you will learn how to make sure that Lace understands your data properly.
Accepted formats
Lace currently accepts the following data formats
- CSV
- CSV.gz (gzipped CSV)
- parquet
- IPC (feather v2)
- JSON (as output by the pandas function df.to_json('mydata.json'))
- JSON Lines
Using a string-based data format
Formatting your data properly will help the platform understand your data.
Under the hood, Lace uses polars for reading data formats into a DataFrame.
For more information about i/o in polars, see the polars API
documentation.
Here are the rules:
- Real-valued (continuous data) cells must have decimals.
- Integer-valued cells, whether count or categorical, must not have decimals.
- Categorical data cells may be integers (up to 255) or string values
- In a CSV, missing cells should be empty
- A row index is required. The index label should be 'ID'.
Not following these rules will confuse the codebook and could cause parsing errors.
Row and column names
Row and column indices or names must be strings. If you were to create a codebook from a CSV with integer row and column indices, Lace would convert them to strings.
Tips on creating valid data with pandas
When reading data from a CSV, Pandas will convert integer columns with missing
cells to float values since floats can represent NaN, which is how pandas
represents missing data. You have a couple of options for saving your CSV file
with both missing cells and properly formatted integers:
You can coerce the types to Int64, which is basically Int plus NaN, and
then write to CSV.
df['my_int_col'] = df['my_int_col'].astype('Int64')
df.to_csv('mydata.csv', index_label='ID')
If you have a lot of columns or particularly long columns, you might find it
much faster just to reformat as you write to the csv, in which case you can
use the float_format option in DataFrame.to_csv
df.to_csv('mydata.csv', index_label='ID', float_format='%g')
Codebook reference
The codebook is how you tell Lace about your data. The codebook contains information about
- Row names
- Column names
- The type of data in each column (e.g., continuous, categorical, or count)
- The prior on the parameters for each column
- The hyperprior on the prior parameters for each column
- The prior on the Dirichlet Process alpha parameter
Codebook fields
table_name
String name of the table. For your reference.
state_alpha_prior
A gamma prior on the Chinese Restaurant Process (CRP) alpha parameter assigning columns to views.
Example with a gamma prior
state_alpha_prior:
Gamma:
shape: 1.0
rate: 1.0
view_alpha_prior
A gamma prior on the Chinese Restaurant Process (CRP) alpha parameter assigning rows within views to categories.
Example with a gamma prior
view_alpha_prior:
Gamma:
shape: 1.0
rate: 1.0
col_metadata
A list of columns, ordered by left-to-right occurrence in the data. Contains the following fields:
- name: The name of the column
- notes: Optional information about the column. Purely for reference
- coltype: Contains information about the type of data, the prior, and the hyper prior. See column metadata for more information
- missing_not_at_random: A boolean. If false (default), missing values in the column are assumed to be missing completely at random.
row_names
A list of row names in order of top-to-bottom occurrence in the data
notes
Optional notes for user reference
Codebook type inference
When you upload your data, Lace will pull the row and column names from the file, infer the data types, and choose an empirical hyperprior from the data.
Type inference works like this:
- Categorical if:
  - The column contains only string values. Lace will assume the categorical variable can take on any of (and only) the existing values in the column.
  - The column contains only integers, all at or below a cutoff. For an integer column x, the categorical values will be assumed to take on values 0 to max(x).
- Count if:
  - The column contains only integers, some of which exceed the cutoff
- Continuous if:
  - The column contains one or more floats
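The rules above can be sketched as a small heuristic (the cutoff value here is an arbitrary assumption, and Lace's actual implementation lives in its codebook builder):

```python
CUTOFF = 20  # assumed value; Lace defines its own cutoff

def infer_coltype(values):
    """Apply the type-inference rules to a list of cell values."""
    values = [v for v in values if v is not None]  # ignore missing cells
    if all(isinstance(v, str) for v in values):
        return 'Categorical'
    if all(isinstance(v, int) for v in values):
        return 'Categorical' if max(values) <= CUTOFF else 'Count'
    return 'Continuous'  # one or more floats present

assert infer_coltype(['LEO', 'GEO', None]) == 'Categorical'
assert infer_coltype([0, 1, 2]) == 'Categorical'
assert infer_coltype([150, 4000, 12]) == 'Count'
assert infer_coltype([1, 2.5, 3]) == 'Continuous'
```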
Column metadata
- Either prior or hyper must be defined.
- If prior is defined and hyper is not defined, hyperpriors and hyperparameter inference will be disabled.

It is best to leave the hyperpriors alone. It is difficult to intuit what effect the hyperpriors have on the final distribution. If you have knowledge beyond the vague hyperpriors, null out the `hyper` field with a `~` and set the prior instead. This will disable hyperparameter inference in favor of the expert knowledge you have provided.
Continuous
The continuous type has the hyper field and the prior field. The prior
parameters are those for the Normal Inverse Chi-squared prior on the mean and
variance of a normal distribution.
- m: the prior mean
- k: how strongly (in pseudo observations) we believe m
- s2: the prior variance
- v: how strongly (in pseudo observations) we believe s2

To have widely dispersed components with small variances you would set k very low and v very high.
FIXME: Animation showing effect of different priors
The hyper priors are the priors on the above parameters. They are named for the
parameters to which they are attached, e.g. pr_m is the hyper prior for the
m parameter.
- pr_m: Normal distribution
- pr_k: Gamma distribution with shape and rate (inverse scale) parameters
- pr_v: Inverse gamma distribution with shape and scale parameters
- pr_s2: Inverse gamma distribution with shape and scale parameters
- name: Eccentricity
  coltype:
    Continuous:
      hyper:
        pr_m:
          mu: 0.02465318142734303
          sigma: 0.1262297091840037
        pr_k:
          shape: 1.0
          rate: 1.0
        pr_v:
          shape: 7.0587581525186648
          scale: 7.0587581525186648
        pr_s2:
          shape: 7.0587581525186648
          scale: 0.015933939480678149
      prior:
        m: 0.0
        k: 1.0
        s2: 7.0
        v: 1.0
      # To not define the prior add a `~`
      # prior: ~
  notes: ~
  missing_not_at_random: false
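The hyperprior-to-prior-to-component hierarchy above can be sketched end to end with the standard library. This is not Lace's sampler; the hyperprior parameter values are arbitrary assumptions, and numpy/scipy are avoided for brevity:

```python
import math
import random

rng = random.Random(0)

# hyperprior draws: pr_m is Normal, pr_k is Gamma, pr_v and pr_s2 are
# inverse gamma (an inverse-gamma draw is 1 over a gamma draw)
m = rng.normalvariate(0.0, 1.0)
k = rng.gammavariate(1.0, 1.0)
v = 1.0 / rng.gammavariate(2.0, 1.0)
s2 = 1.0 / rng.gammavariate(2.0, 1.0)

# prior draws: sigma^2 ~ ScaledInvChiSq(v, s2), mu ~ Normal(m, sigma^2 / k);
# a scaled inverse chi-squared draw is v * s2 over a chi-squared(v) draw,
# and chi-squared(v) is Gamma(v/2, scale=2)
sigma2 = v * s2 / rng.gammavariate(v / 2.0, 2.0)
mu = rng.normalvariate(m, math.sqrt(sigma2 / k))

# component draw: x ~ Normal(mu, sigma^2)
x = rng.normalvariate(mu, math.sqrt(sigma2))
assert sigma2 > 0 and math.isfinite(x)
```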
Categorical
In addition to prior and hyper, Categorical has additional special fields:
- k: the number of values the variable can assume
- value_map: An optional map of integers in [0, ..., k-1] mapping the integer code (how the value is represented internally) to the string value. If value_map is not defined, it is assumed that classes take on only integer values in [0, ..., k-1].

The hyper is an inverse gamma prior on the prior parameter alpha.
- name: Class_of_Orbit
  coltype:
    Categorical:
      k: 4
      hyper:
        pr_alpha:
          shape: 1.0
          scale: 1.0
      value_map:
        0: Elliptical
        1: GEO
        2: LEO
        3: MEO
      prior:
        alpha: 0.5
      # To not define the prior add a `~`
      # prior: ~
  notes: ~
  missing_not_at_random: false
Editing the codebook
You should use the default codebook generated by the Lace CLI as a starting point for custom edits. Generally the only edits you will make are
- Adding notes/comments
- Changing the state_alpha_prior and view_alpha_prior (though you should only do this if you know what you're doing)
- Converting a Count column to a Categorical column. Usually there will be no need to change between other column types.
Appendix
Stats primer
What you need to know about Bayesian Statistics
Bayesian statistics is built around the idea of posterior inference. The posterior distribution is the probability distribution of the parameters, \(\theta\), of some model given observed data, \(x\). In math: \( p(\theta | x) \). Per Bayes' theorem, the posterior distribution can be written in terms of other distributions,
\[ p(\theta | x) = \frac{p(x|\theta)p(\theta)}{p(x)}, \]
where \( p(x|\theta) \) is the likelihood of the observations given the parameters of our model; \( p(\theta) \) is the prior distribution, which defines our beliefs about the model parameters in the absence of data; and \( p(x) \) is the marginal likelihood, which is the likelihood of the data marginalized over all possible models. Of these, the likelihood and prior are the two distributions we're most concerned with. The marginal likelihood, which is defined as
\[ p(x) = \int p(x|\theta)p(\theta) d\theta \]
is notoriously difficult and a lot of effort in Bayesian computation goes toward making the marginal likelihood go away, so we won't talk about it much.
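One standard way to make the marginal likelihood go away is conjugacy: when the prior is conjugate to the likelihood, the posterior has a closed form and \( p(x) \) never needs to be computed. A minimal beta-binomial example:

```python
# Beta(a, b) prior on a coin's bias, Binomial likelihood: the posterior
# is Beta(a + k, b + n - k), with no integration required.
a, b = 1.0, 1.0          # Beta(1, 1) = uniform prior
n, k = 10, 7             # data: 7 heads in 10 flips
post_a, post_b = a + k, b + n - k   # posterior: Beta(8, 4)

posterior_mean = post_a / (post_a + post_b)
assert abs(posterior_mean - 8.0 / 12.0) < 1e-12
```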
Finite mixture models
A mixture model is a weighted sum of probability distributions. Here is an example of a bimodal mixture model.
This mixture model is defined as the sum of two Normal distributions:
\[ p(x) = \frac{1}{2} N(x; \mu=-3, \sigma^2=1) + \frac{1}{2} N(x; \mu=3, \sigma^2=1). \]
In lace, we will often use the term mixture component to refer to an individual model within a mixture.
In general a mixture distribution has the form
\[ p(x|\theta) = \sum_{i=1}^K w_i \, p(x|\theta_i), \]
where \(K\) is the number of mixture components, \(w_i\) is the \(i^{th}\) weight, and all weights are positive and sum to 1.
To draw a mixture model from the prior,
- Draw the weights, \( w \sim \text{Dirichlet}(\alpha) \), where \(\alpha\) is a \(K\)-length vector of values in \((0, \infty)\).
- For \(i \in {1, ..., K}\) draw \( \theta_i \sim p(\theta) \)
As a note, we usually use one \(\alpha\) value repeated \(K\) times rather than \(K\) distinct values. We do not often have reason to think that any one component is more likely than the other, and reducing a vector to one value reduces the number of dimensions in our model.
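The two-step draw above can be sketched with the standard library (a Dirichlet draw is a set of normalized Gamma draws; the Normal prior on the component means is an arbitrary assumption for illustration):

```python
import random

rng = random.Random(0)
K, alpha = 3, 1.0

# 1. draw the weights: w ~ Dirichlet(alpha, ..., alpha)
raw = [rng.gammavariate(alpha, 1.0) for _ in range(K)]
total = sum(raw)
weights = [r / total for r in raw]

# 2. draw each component's parameters: theta_i ~ p(theta);
#    here, a Normal prior on the component means
means = [rng.normalvariate(0.0, 3.0) for _ in range(K)]

# sampling from the resulting mixture: pick a component by weight,
# then draw from that component
component = rng.choices(range(K), weights=weights)[0]
x = rng.normalvariate(means[component], 1.0)

assert abs(sum(weights) - 1.0) < 1e-9
```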
Dirichlet process mixture models (DPMM)
Suppose we don't know how many components are in our model. Given \(N\) data, there could be as few as 1 and as many as \(N\) components. To infer the number of components or categories, we place a prior on how categories are formed. One such prior is the Dirichlet process. To explain the Dirichlet process we use the Chinese restaurant process (CRP) formalization.
The CRP metaphor works like this: you are on your lunch break and, as one often does, you go to your usual luncheon spot: a magical Chinese restaurant where the rules of reality do not apply. Today you happen to arrive a bit before open and, as such, are the first customer to be seated. There is exactly one table with exactly one seat. You sit at that table. Or was the table instantiated for you? Who knows. The next customer arrives. They have a choice. They can sit with you or they can sit at a table alone. Customers at this restaurant love to sit together (customers of interdimensional restaurants tend to be very social creatures), but the owners offer a discount to customers who instantiate new tables. Mathematically, a customer sits at a table with probability
\[ p(z_i = k) = \begin{cases} \frac{n_k}{N_{-i}+\alpha}, & \text{if $k$ exists} \\ \frac{\alpha}{N_{-i}+\alpha}, & \text{otherwise} \end{cases}, \]
where \(z_i\) is the table of customer i, \(n_k\) is the number of customers currently seated at table \(k\), and \(N_{-i}\) is the total number of seated customers, not including customer i (who is still deciding where to sit).
Under the CRP formalism, we make inferences about what datum belongs to which category. The weight vector is implicit. That's it. For information on how inference is done in DPMMs check out the literature recommendations.
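The seating rule above can be sketched as a short sampler (an illustration of the process, not Lace's inference code):

```python
import random

def crp_sample(n_customers, alpha, rng):
    """Seat customers one at a time; return a table index per customer."""
    counts = []       # number of customers at each table
    assignments = []
    for i in range(n_customers):
        total = i + alpha  # N_{-i} + alpha: i customers already seated
        # existing table k with probability n_k / (N_{-i} + alpha),
        # a new table with probability alpha / (N_{-i} + alpha)
        probs = [n / total for n in counts] + [alpha / total]
        table = rng.choices(range(len(counts) + 1), weights=probs)[0]
        if table == len(counts):
            counts.append(1)  # instantiate a new table
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments

z = crp_sample(100, alpha=1.0, rng=random.Random(0))
assert z[0] == 0  # the first customer always instantiates a table
```

Larger alpha makes new tables more attractive, so it controls how readily new categories form.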
Glossary
Here we list some of the terminology, including acronyms, you will encounter when using lace.
- view: A cluster of columns within a state.
- category: A cluster of rows within a view.
- component model: The probability distribution defining the model of a specific category in a column.
- state: What we call a lace posterior sample. Each state represents the current configuration (or state) of an independent Markov chain. We aggregate over states to achieve estimates of likelihoods and uncertainties.
- metadata: A set of files from which an Engine may be loaded
- prior: A probability distribution that describes how likely certain hypotheses (model parameters) are before we observe any data.
- hyperprior: A prior distribution on a prior. Allows us to admit a larger amount of initial uncertainty and permit a broader set of hypotheses.
- empirical hyperprior: A hyperprior with parameters derived from data.
- CRP: Chinese restaurant process
- DPMM: Dirichlet process mixture model
- PCC: Probabilistic cross-categorization
References
Bayesian statistics and information theory
For an introduction to Bayesian statistics, information theory, and Markov chain Monte Carlo (MCMC), David MacKay's "Information Theory, Inference and Learning Algorithms" 1 is an excellent choice and it's available for free.
MacKay, D. J. C. (2003). Information theory, inference and learning algorithms. Cambridge University Press. (PDF)
Dirichlet process mixture models
For an introduction to infinite mixture models via the Dirichlet process, Carl Rasmussen's "The infinite Gaussian mixture model"2 provides an introduction to the model, and Radford Neal's "Markov chain sampling methods for Dirichlet process mixture models"3 provides an introduction to basic MCMC methods. When I was learning Dirichlet process mixture models, I found Frank Wood and Michael Black's "A nonparametric Bayesian alternative to spike sorting"4 extremely helpful. Because its target audience is applied scientists, it lays things out more simply and completely than a manuscript aimed at statisticians or computer scientists might.
Rasmussen, C. (1999). The infinite Gaussian mixture model. Advances in neural information processing systems, 12. (PDF)
Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of computational and graphical statistics, 9(2), 249-265. (PDF)
Wood, F., & Black, M. J. (2008). A nonparametric Bayesian alternative to spike sorting. Journal of neuroscience methods, 173(1), 1-12. (PDF)
Probabilistic cross-categorization (PCC)
For a compact explanation designed for people unfamiliar with Bayesian statistics, see Shafto et al.5. This work is targeted at psychologists and demonstrates PCC's power to model human cognitive capabilities. For an incredibly in-depth overview with loads of math, use cases, and examples, see Mansinghka et al.6.
Shafto, P., Kemp, C., Mansinghka, V., & Tenenbaum, J. B. (2011). A probabilistic model of cross-categorization. Cognition, 120(1), 1-25.(PDF)
Mansinghka, V., Shafto, P., Jonas, E., Petschulat, C., Gasner, M., & Tenenbaum, J. B. (2016). Crosscat: A fully bayesian nonparametric method for analyzing heterogeneous, high dimensional data. (PDF)